Making Software: What Really Works, and Why We Believe It
Authors: Andy Oram, Greg Wilson
Overview
Making Software: What Really Works, and Why We Believe It, edited by Andy Oram and Greg Wilson, is a collection of essays exploring evidence-based approaches to software development. Written by leading researchers and practitioners, the book tackles a wide range of topics, from general principles of evidence gathering and evaluation to specific issues such as programming languages, test-driven development, and the role of personality and intelligence in software engineering. It is structured in two parts: the first covers general principles of evidence-based software engineering, while the second delves into specific issues in software development.
A key theme throughout is the importance of moving beyond anecdotal evidence and “common knowledge” to make decisions based on sound empirical data. The book highlights the challenges of applying traditional scientific methods to software engineering, given the field’s complex, human-centric, and constantly evolving nature. It emphasizes context-specific evidence and advocates for the creation of data repositories and tools that facilitate the sharing and reanalysis of data. It also challenges readers to critically evaluate the evidence they encounter, both in the literature and in their own work, and to adopt an evidence-based mindset in order to improve their practice and make better decisions.
The intended audience is broad, encompassing software developers, testers, managers, researchers, and students. The book’s relevance stems from its attempt to bring rigor and objectivity to a field often dominated by opinions, fads, and untested assumptions; by advocating for an evidence-based approach, it aims to contribute to the maturation of software engineering as a discipline.
Its contribution to the ongoing discussion of how to improve software quality and productivity, particularly in complex and challenging environments like global software development, makes it a valuable read for anyone interested in building better software.
Chapter Outline
1. The Quest for Convincing Evidence
This chapter emphasizes the importance of evidence-based decision making in software engineering. It discusses how the pursuit of elegant, statistically sound, and replicable evidence often faces challenges in software engineering research. The chapter argues for context-specific evidence that motivates change and for the creation of software engineering data repositories.
Key concept: Convincing evidence motivates change.
2. Credibility, or Why Should I Insist on Being Convinced?
This chapter focuses on the critical evaluation of evidence in software engineering, emphasizing critical thinking. It introduces the concepts of credibility and relevance of evidence, arguing that different purposes demand different levels of evidence. The chapter also delves into quantitative and qualitative research methods, presenting them as complementary approaches. It highlights the importance of benchmarking for evaluating software.
Key concept: Software development is similar [to detective work]: evidence emerges over time, and the quality of the engineering hinges on the critical faculties of the engineer.
3. What We Can Learn from Systematic Reviews
This chapter presents systematic reviews (SRs) as a rigorous methodology for aggregating evidence in software engineering, comparing it to evidence-based medicine. The chapter outlines the benefits of SRs, highlighting their ability to overcome limitations of traditional literature reviews and argues for their widespread adoption in the software engineering research community.
Key concept: We cannot have evidence-based software engineering without a sound methodology for aggregating evidence from different empirical studies.
4. Understanding Software Engineering Through Qualitative Methods
This chapter champions qualitative methods for understanding the “why” and “how” questions in software engineering, arguing that numbers alone cannot explain complex phenomena. It presents a practical example of using qualitative methods to investigate why a software team might resist adopting a new technology, emphasizing the importance of triangulating evidence from multiple sources.
Key concept: No one approach will reveal the whole unbiased truth. Instead, good qualitative research combines multiple methods, allowing one to triangulate evidence from multiple sources.
5. Learning Through Application: The Maturing of the QIP in the SEL
This chapter describes the experiences of the NASA Software Engineering Laboratory (SEL) in applying empirical research methods to improve software development processes. It emphasizes the importance of learning through application, recognizing that software engineering is an exploratory science. The chapter also highlights the key role of the Quality Improvement Paradigm (QIP) and the Experience Factory in evolving software development practices.
Key concept: We are, in short, an exploratory science: we are more dependent on empirical application of methods and techniques than many disciplines.
6. Personality, Intelligence, and Expertise: Impacts on Software Development
This chapter investigates the role of individual traits, such as personality and intelligence, in software development, particularly programming. It examines whether these fixed characteristics can predict good programming performance and whether they can be used to identify good programmers. The chapter argues that although intelligence is important for skill acquisition, skills and expertise related to the specific task are stronger predictors of performance. It also examines task complexity and the role of the environment in enabling good performance, advocating for environmental measures such as pair programming and collaboration to enhance software development.
Key concept: Success calls for both intelligence and skill.
7. Why Is It So Hard to Learn to Program?
This chapter examines why it is hard to learn to program, citing research that shows high failure rates in introductory programming courses. The chapter discusses the difficulty of evaluating student learning and the potential of visual programming languages. It argues that the problem of teaching programming may lie more in the tools and teaching methods than in any inherent difficulty of programming itself.
Key concept: Students may be more capable of computational thinking than we give them credit for.
8. Beyond Lines of Code: Do We Need More Complexity Metrics?
This chapter delves into the world of software complexity metrics, questioning whether we need more sophisticated measures beyond simple lines of code. Analyzing a large dataset of C code, the chapter finds that traditional complexity metrics such as McCabe’s cyclomatic complexity and Halstead’s metrics are highly correlated with lines of code, suggesting that they do not provide significant additional information. The chapter concludes that syntactic complexity alone cannot capture the full picture of software complexity and emphasizes the importance of considering semantic complexity and other factors.
Key concept: Syntactic complexity metrics cannot capture the whole picture of software complexity.
Essential Questions
1. Why is an evidence-based approach crucial in software engineering?
The book emphasizes that relying solely on intuition and anecdotal evidence can be misleading. It argues for a systematic approach to gathering and analyzing data to understand what works and why. This involves understanding the context, applying appropriate research methods (both quantitative and qualitative), and considering the limitations of the evidence.
2. What are the challenges in applying empirical research methods to software engineering?
The book highlights various challenges. These include the difficulty of designing controlled experiments that reflect real-world software development practices, the lack of consensus on appropriate statistical methods, the inherent biases in data collection and analysis, and the need for more research on a wider range of topics and programming languages.
3. How can software engineering data repositories help advance the field?
The book advocates for a shift towards creating and using software engineering data repositories. These repositories would provide a central location for storing and sharing data from various studies, facilitating reanalysis, replication, and generalization of findings. The book presents examples of existing repositories, such as the PROMISE repository and the NASA Software Engineering Laboratory (SEL), as models for future efforts.
Key Takeaways
1. The effectiveness of software development practices is context-dependent.
The book emphasizes that the effectiveness of software development practices is context-specific. For example, the effectiveness of pair programming depends on factors such as task complexity and the experience of the programmers involved.
Practical Application:
When considering adopting a new technology or practice, such as test-driven development (TDD), AI product engineers should look for empirical evidence supporting its effectiveness in similar contexts. They should also consider the limitations of the evidence and be cautious about generalizing findings from one context to another.
2. Human factors, such as cognitive styles and work habits, significantly impact software development.
The book highlights the importance of understanding the human factors that influence software development. It argues that there is no single “best” way to design software, as the optimal approach depends on the cognitive styles and work habits of the programmers involved.
Practical Application:
When designing or evaluating software interfaces, AI product engineers should consider the different cognitive styles and work habits of different developer personas. An interface that is usable for one type of developer might not be usable for another. Scenario-based design and usability studies can help ensure an API is usable for its intended audience.
3. The organizational structure of a development team can significantly impact software quality.
The book highlights Conway’s Law, which states that “organizations that design systems are constrained to produce systems which are copies of the communication structures of these organizations.” It also presents evidence suggesting that aligning the team structure with the architecture of the software can lead to better outcomes in terms of quality and productivity.
Practical Application:
When managing a team of AI engineers, a manager should pay attention to the team’s organizational structure and the communication patterns that emerge from it. Because that structure shapes the quality of the software produced, aligning it with the software’s architecture can lead to better outcomes.
Suggested Deep Dive
Chapter 20: Identifying and Managing Dependencies in Global Software Development
Given the increasing prominence of global software development and its inherent challenges in coordinating work and managing dependencies, this chapter provides valuable insights for AI product engineers who may be involved in distributed teams or projects. The chapter emphasizes the concept of socio-technical congruence and presents practical strategies for identifying and mitigating coordination breakdowns.
Comparative Analysis
This book stands out for its focus on the practical application of empirical research in software engineering, in contrast with more theoretical or method-focused works. Books like “Code Complete” by Steve McConnell provide comprehensive guidelines for code construction, and “The Mythical Man-Month” by Fred Brooks examines the challenges of managing large software projects; “Making Software” goes a step further by questioning widely held assumptions in the field and presenting evidence-based answers, drawing on studies from domains including psychology, sociology, and cognitive science. The book echoes the call for evidence-based software engineering found in works like “Evidence-Based Software Engineering” by Barbara Kitchenham and “A Handbook of Software and Systems Engineering” by Endres and Rombach, but provides a more accessible and practical guide for practitioners. It also aligns with works highlighting the importance of human factors in software engineering, such as “Peopleware” by DeMarco and Lister and “Software Psychology” by Shneiderman, by dedicating chapters to the role of personality, intelligence, communication, and collaboration in software development.
Reflection
This book serves as a call to action for the software engineering community to adopt a more data-driven and evidence-based approach. This call is particularly relevant in the era of AI, where the complexity of the systems being developed demands greater rigor and objectivity in decision making. While the book presents compelling evidence for the benefits of evidence-based software engineering, it also acknowledges the limitations of current research and the challenges in applying traditional scientific methods to a field as complex and dynamic as software development. Skeptics might argue that the studies presented in the book are too limited in scope or that the findings cannot be generalized to other contexts. They might also point out that the book does not provide clear-cut answers to all the questions it raises. However, the book’s strength lies not in providing definitive solutions, but in raising important questions, presenting evidence, and challenging readers to think critically about how they make decisions in their work. This emphasis on critical thinking and the continuous evaluation of evidence is crucial for the advancement of software engineering as a discipline, particularly as AI becomes an increasingly integral part of software development.